17 results for BENCHMARKING (ADMINISTRACION)

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

This paper is initial work towards developing a user-centric e-Government benchmarking model. To that end, public service delivery is discussed first, including the transition to online delivery and the need to provide public services through electronic media. Two major e-Government benchmarking methods are then critically discussed, and the need for a standardized, user-centric benchmarking model is presented. To articulate user requirements in service provision properly, an organizational semiotics method is suggested.

Relevance:

20.00%

Publisher:

Abstract:

This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1·0 km × 0·4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
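
As an aside to the supercritical-flow finding above, the sketch below shows one way such transitions can be located in gridded 2D model output: compute the Froude number per cell and flag cells where it exceeds one. The depth and velocity arrays, the wet-cell threshold and the use of NumPy are illustrative assumptions; this is not code from any of the six benchmarked models.

    # Illustrative only: flag supercritical cells (Fr > 1) in a 2D shallow-water
    # solution, as one way to locate the flow transitions discussed above.
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def froude_number(u, v, h, min_depth=0.01):
        """Froude number Fr = |velocity| / sqrt(g * depth) for each wet cell."""
        speed = np.hypot(u, v)
        wet = h > min_depth
        fr = np.zeros_like(h)
        fr[wet] = speed[wet] / np.sqrt(G * h[wet])
        return fr

    # Example: random test grid standing in for model output (assumed, not real data).
    rng = np.random.default_rng(0)
    u = rng.uniform(-2.0, 2.0, (40, 100))   # x-velocity, m/s
    v = rng.uniform(-2.0, 2.0, (40, 100))   # y-velocity, m/s
    h = rng.uniform(0.0, 1.5, (40, 100))    # water depth, m

    fr = froude_number(u, v, h)
    supercritical = fr > 1.0
    print(f"{supercritical.mean():.1%} of cells are supercritical")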

Relevance:

20.00%

Publisher:

Abstract:

Routine milk recording data, often covering many years, are available for approximately half the dairy herds of England and Wales. In addition to milk yield and quality, these data include production events that can be used to derive objective Key Performance Indicators (KPIs) describing a herd's fertility and production. Recent developments in information systems give veterinarians and other technical advisers access to these KPIs on-line. In addition to reviewing individual herd performance, advisers can establish local benchmark groups to demonstrate the relative performance of similar herds in the vicinity. The use of existing milk recording data places no additional demands on farmers' time or resources. These developments could also readily be exploited by universities to introduce veterinary undergraduates to the realities of commercial dairy production.
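
A minimal sketch of the benchmarking idea described above, under simplifying assumptions: derive one fertility KPI (mean calving interval) per herd from recorded calving dates and rank a herd against a local benchmark group. The KPI choice, data layout and percentile measure are illustrative assumptions rather than the definitions used by the milk recording systems.

    from datetime import date
    from statistics import mean

    def calving_interval_kpi(calving_dates):
        """Mean interval in days between successive calvings for one cow."""
        ordered = sorted(calving_dates)
        gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
        return mean(gaps) if gaps else None

    def herd_kpi(herd):
        """Herd-level KPI: mean of the per-cow calving intervals."""
        values = [v for v in (calving_interval_kpi(c) for c in herd) if v is not None]
        return mean(values) if values else None

    def percentile_rank(value, benchmark_values):
        """Share of benchmark-group herds whose KPI is below this herd's value."""
        below = sum(1 for v in benchmark_values if v < value)
        return 100.0 * below / len(benchmark_values)

    # Example: one herd of two cows, ranked against three neighbouring herds.
    herd = [[date(2020, 1, 10), date(2021, 2, 1)],
            [date(2020, 3, 5), date(2021, 3, 20)]]
    print(herd_kpi(herd), percentile_rank(herd_kpi(herd), [365, 395, 410]))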

Relevance:

20.00%

Publisher:

Abstract:

Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking a few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
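
A minimal sketch of two ideas from this abstract, under simplifying assumptions: combining several "true" MQAP scores into a consensus score, and using that score as a post filter to re-rank candidate models from a fold recognition server. Plain averaging is used purely for illustration; ModFOLD itself combines the component scores with an artificial neural network, which is not reproduced here.

    from statistics import mean

    def consensus_score(model, scorers):
        """Average the (assumed 0-1) quality scores from several MQAPs."""
        return mean(scorer(model) for scorer in scorers)

    def rerank(models, scorers):
        """Post filter: re-rank candidate models by their consensus score."""
        return sorted(models, key=lambda m: consensus_score(m, scorers), reverse=True)

    # Usage with stand-in scoring functions (real MQAPs would evaluate PDB models).
    scorers = [lambda m: m["s1"], lambda m: m["s2"]]
    models = [{"id": "m1", "s1": 0.61, "s2": 0.70},
              {"id": "m2", "s1": 0.82, "s2": 0.75}]
    print([m["id"] for m in rerank(models, scorers)])  # ['m2', 'm1']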

Relevance:

20.00%

Publisher:

Abstract:

Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations which work in the supply chain. As suppliers can provide a variety of services and customers demand higher quality of service provision, organisations face the challenge of choosing the right supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytical hierarchy process (AHP), can automatically select competitive suppliers and determine the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected against the right criteria for the right customer needs.
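
A short sketch of the AHP component mentioned above, assuming invented criteria and judgements: derive criteria weights from a pairwise comparison matrix (using the row geometric-mean approximation of the principal eigenvector) and score suppliers against those weights. The paper's ontology layer for selecting the criteria themselves is not modelled here.

    import numpy as np

    def ahp_weights(pairwise):
        """Approximate AHP priority weights via row geometric means."""
        gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
        return gm / gm.sum()

    # Example: three criteria (cost, quality, delivery) compared pairwise
    # on Saaty's 1-9 scale; the judgements are illustrative.
    pairwise = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ])
    weights = ahp_weights(pairwise)

    # Supplier scores per criterion (rows: suppliers, columns: criteria), 0-1 scale.
    scores = np.array([
        [0.7, 0.9, 0.6],
        [0.8, 0.6, 0.9],
    ])
    overall = scores @ weights
    print(weights.round(3), overall.round(3))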

Relevance:

20.00%

Publisher:

Abstract:

Purpose – The paper addresses the practical problems which emerge when attempting to apply longitudinal approaches to the assessment of property depreciation using valuation-based data. These problems relate to inconsistent valuation regimes and the difficulties in finding appropriate benchmarks. Design/methodology/approach – The paper adopts a case study of seven major office locations around Europe and attempts to determine ten-year rental value depreciation rates based on a longitudinal approach using IPD, CBRE and BNP Paribas datasets. Findings – The depreciation rates range from a 5 per cent per annum depreciation rate in Frankfurt to a 2 per cent appreciation rate in Stockholm. The results are discussed in the context of the difficulties in applying this method with inconsistent data. Research limitations/implications – The paper has methodological implications for measuring property investment depreciation and provides an example of the problems in adopting theoretically sound approaches with inconsistent information. Practical implications – Valuations play an important role in performance measurement and cross-border investment decision making; therefore, knowledge of the inconsistency of valuation practice aids decision making and informs any application of valuation-based data in deriving depreciation rates. Originality/value – The paper provides new insights into the use of property market valuation data in a cross-border context, insights that had previously been anecdotal and unproven in nature.
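
One common longitudinal formulation of a rental value depreciation rate, shown as a small worked example: the asset's compound annual growth in rental value is compared with that of a benchmark over the same period. This is illustrative only, and the exact specification used in the paper may differ.

    # Depreciation p.a. = benchmark rental growth p.a. minus asset rental growth p.a.
    def annual_depreciation_rate(rv_start, rv_end, bench_start, bench_end, years):
        asset_growth = (rv_end / rv_start) ** (1.0 / years) - 1.0
        bench_growth = (bench_end / bench_start) ** (1.0 / years) - 1.0
        return bench_growth - asset_growth

    # Ten-year example: asset rents stay flat while the benchmark rises ~2% p.a.
    print(f"{annual_depreciation_rate(100, 100, 100, 121.9, 10):.2%}")  # ~2.00% p.a.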

Relevance:

20.00%

Publisher:

Abstract:

Herd Companion uses routine milk‐recording records to generate twelve‐month rolling averages that indicate performance trends. This article looks at Herd Somatic Cell Count (SCC) and four other SCC‐related parameters from 252 National Milk Records (NMR) recorded herds to assess how each parameter correlates with the Herd SCC. The analysis provides evidence for the importance of targeting individual cows with high SCC recordings (>200,000 cells/ml and >500,000 cells/ml) and/or individual cows with repeatedly high SCC recordings (chronic high SCC) and/or cows that begin lactation with a high SCC recording (dry period infection) in order to achieve bulk milk Herd SCC below 200,000 cells/ml.
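
A minimal sketch of the kind of cow-level SCC parameters discussed above, computed from individual recordings for one herd. The thresholds follow the abstract (200,000 and 500,000 cells/ml), but the data layout and the rule used here for a "chronic" high SCC (two consecutive recordings over 200,000 cells/ml) are assumptions rather than the NMR definitions.

    def scc_parameters(recordings):
        """recordings: cow id -> list of SCC values ('000 cells/ml), oldest first."""
        n = len(recordings)
        latest = {cow: sccs[-1] for cow, sccs in recordings.items()}
        over_200 = sum(1 for v in latest.values() if v > 200) / n
        over_500 = sum(1 for v in latest.values() if v > 500) / n
        chronic = sum(
            1 for sccs in recordings.values()
            if len(sccs) >= 2 and sccs[-1] > 200 and sccs[-2] > 200
        ) / n
        return {"pct_over_200": over_200, "pct_over_500": over_500, "pct_chronic": chronic}

    # Example herd of four cows with two recordings each (illustrative values).
    herd = {"c1": [150, 180], "c2": [320, 410], "c3": [90, 620], "c4": [250, 230]}
    print(scc_parameters(herd))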

Relevance:

20.00%

Publisher:

Abstract:

If secondary structure predictions are to be incorporated into fold recognition methods, an assessment of the effect of specific types of errors in predicted secondary structures on the sensitivity of fold recognition should be carried out. Here, we present a systematic comparison of different secondary structure prediction methods by measuring frequencies of specific types of error. We carry out an evaluation of the effect of specific types of error on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing out whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong lengths of elements or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing accuracy of secondary structure prediction and perhaps may also provide a more appropriate assessment of the “usefulness” and quality of predicted secondary structure, if secondary structure alignments are to be used in fold recognition.
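
A minimal, illustrative sketch of element-level comparison of secondary structure strings (H = helix, E = strand, C = coil), in the spirit of the SSEA baseline used above: collapse each string into elements and align the element lists by dynamic programming. The scoring scheme (credit for type match weighted by length overlap, no gap penalty) is an assumption, not the published SSEA scoring.

    from itertools import groupby

    def to_elements(ss):
        """Collapse an H/E/C string into (type, length) elements."""
        return [(t, len(list(g))) for t, g in groupby(ss)]

    def align_score(elems_a, elems_b):
        """Global alignment of element lists by dynamic programming."""
        n, m = len(elems_a), len(elems_b)
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                (ta, la), (tb, lb) = elems_a[i - 1], elems_b[j - 1]
                match = min(la, lb) / max(la, lb) if ta == tb else 0.0
                dp[i][j] = max(dp[i - 1][j - 1] + match, dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    pred = "CCHHHHHHCCEEEEECC"   # predicted secondary structure (illustrative)
    obs  = "CCCHHHHHCCEEEECCC"   # observed secondary structure (illustrative)
    print(f"element alignment score: {align_score(to_elements(pred), to_elements(obs)):.2f}")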

Relevance:

20.00%

Publisher:

Abstract:

Commercial kitchens often leave a large carbon footprint. A new dataset of energy performance metrics from a leading industrial partner is presented. Categorising these types of buildings is challenging. Electricity use has been analysed using data from automated meter readings (AMR) for the purpose of benchmarking and is discussed in terms of factors such as size and food output. From the analysed results, consumption is found to be almost double the previous sector estimate of 6480 million kWh per year. Recommendations are made to further improve the current benchmarks in order to attain robust, reliable and transparent figures, such as the introduction of normalised performance indicators that take account of kitchen size (m²) and kWh per thousand pounds of turnover.
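
A small sketch of the normalised performance indicators recommended above: annual electricity use per square metre of kitchen and per thousand pounds of turnover, computed from half-hourly AMR readings. The data layout and the figures used are assumptions for illustration.

    def annual_kwh(amr_readings):
        """Sum half-hourly AMR readings (kWh) over a year."""
        return sum(amr_readings)

    def performance_indicators(amr_readings, kitchen_area_m2, annual_turnover_gbp):
        total = annual_kwh(amr_readings)
        return {
            "kwh_per_m2": total / kitchen_area_m2,
            "kwh_per_1000_gbp": total / (annual_turnover_gbp / 1000.0),
        }

    # Example: a site using ~250,000 kWh/year, 150 m2 kitchen, 1.2m GBP turnover.
    readings = [14.3] * (365 * 48)  # stand-in half-hourly values
    print(performance_indicators(readings, 150.0, 1_200_000.0))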

Relevance:

20.00%

Publisher:

Abstract:

Commercial kitchens are among the most profligate users of gas, water and electricity in the UK and can leave a large carbon footprint. It is estimated that the total energy consumption of Britain’s catering industry is in excess of 21,600 million kWh per year. In order to facilitate appropriate energy reduction within licensed restaurants, energy use must be translated into a form that can be compared between kitchens, so that operators can assess how they are improving and rapidly identify facilities which require action. A review of the relevant literature is presented and current benchmarking methods are discussed in order to assist in developing and categorising energy-reduction benchmarks for commercial kitchens. Energy use within industry-leading UK brands is discussed for the purpose of benchmarking in terms of factors such as size and output.

Relevance:

20.00%

Publisher:

Abstract:

We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
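
A minimal sketch of the scoring idea described above, with stand-in data: a normalised mean error metric for a simulated field, compared with the scores of a "mean model" (the observed mean everywhere) and of a "random model" built by bootstrap resampling of the observations. The specific metrics in the benchmark system may differ; this is illustrative only.

    import numpy as np

    def nme(sim, obs):
        """Normalised mean error: mean |sim - obs| / mean |obs - mean(obs)|."""
        return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 300.0, size=500)           # e.g. observed NPP, gC/m2/yr (stand-in)
    sim = obs * 1.15 + rng.normal(0, 80, size=500)  # stand-in model output

    mean_model = np.full_like(obs, obs.mean())
    random_model = rng.choice(obs, size=obs.size, replace=True)  # bootstrap resample

    print(f"model score:  {nme(sim, obs):.2f}")
    print(f"mean model:   {nme(mean_model, obs):.2f}")   # equals 1 by construction
    print(f"random model: {nme(random_model, obs):.2f}")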

Relevance:

20.00%

Publisher:

Abstract:

This new survey, which has just been completed and includes brand new data, has been funded by the RICS Education Trust and the European Shopping Centre Trust. It follows up our 2000 survey of UK retailers, investors and developers; the report presents results from our new 2001 survey. This continuing benchmark series of studies includes an extensive review of developments in ecommerce and retail in Europe and the USA. The survey reveals a cooling in attitude towards ecommerce in the UK, but there is rapid growth in some sectors, and polarisation and marginalisation of secondary centres are likely to increase. In Europe, a three-tier system of ecommerce 'pioneers', 'followers' and 'laggards' is becoming established. The research also reveals results from a recent joint survey of US and UK retailers conducted with Colorado State University. There is a danger of complacency, as UK online sales (in percentage terms) now outstrip those in the USA.

Relevance:

20.00%

Publisher:

Abstract:

Past climates provide a test of models’ ability to predict climate change. We present a comprehensive evaluation of state-of-the-art models against Last Glacial Maximum and mid-Holocene climates, using reconstructions of land and ocean climates and simulations from the Palaeoclimate Modelling and Coupled Modelling Intercomparison Projects. Newer models do not perform better than earlier versions despite higher resolution and complexity. Differences in climate sensitivity only weakly account for differences in model performance. In the glacial, models consistently underestimate land cooling (especially in winter) and overestimate ocean surface cooling (especially in the tropics). In the mid-Holocene, models generally underestimate the precipitation increase in the northern monsoon regions, and overestimate summer warming in central Eurasia. Models generally capture large-scale gradients of climate change but have more limited ability to reproduce spatial patterns. Despite these common biases, some models perform better than others.